-
- Recommendation X.290
-
- OSI CONFORMANCE TESTING METHODOLOGY AND FRAMEWORK
- FOR PROTOCOL RECOMMENDATIONS FOR CCITT APPLICATIONS
-
-
- The CCITT,
-
-
- considering
-
- (a) that Recommendation X.200 defines the Reference Model of Open Systems
- for CCITT Applications;
-
- (b) that the objective of OSI will not be completely achieved until systems
- dedicated to CCITT applications can be tested to determine whether they conform to
- the relevant OSI protocol Recommendations;
-
- (c) that standardized test suites should be developed for each OSI protocol
- Recommendation as a means to:
-
- - obtain wide acceptance and confidence in conformance test results
- produced by different testers,
-
- - provide confidence in the interoperability of equipments which
- passed the standardized conformance tests;
-
- (d) the need for defining an international Recommendation to specify the
- framework and general principles for the specification of conformance test suites and
- the testing of protocol implementations,
-
- unanimously declares the view that
-
- 1. the general principles, definition of terms and concepts of OSI protocol
- conformance testing shall be in accordance with Part 1 of this Recommendation;
-
- 2. the test methods, test suites, test notation shall be in accordance with Part 2
- of this Recommendation.
-
-
-
- CONTENTS
-
- PART 1 - GENERAL CONCEPTS
-
- 0. Introduction
-
- 1. Scope and Field of Application
-
- 2. References
-
- Section 1: Terminology
-
- 3. Definitions
-
- 4. Abbreviations
-
- Section 2: Overview
-
- 5. The Meaning of Conformance in OSI*
-
- 6. Conformance and testing
-
- 7. Test Methods
-
- 8. Test Suites
-
- 9. Relationships between Parts, Concepts and Roles
-
- 10. Compliance
-
- PART 2 - ABSTRACT TEST SUITE SPECIFICATION
-
- 0. Introduction
-
- 1. Scope and Field of Application
-
- 2. References
-
- 3. Definitions
-
- 4. Abbreviations
-
- 5. Compliance
-
- Section 1: Requirements on Protocol Specifiers
-
- 6. Conformance Requirements in OSI* Recommendations*
-
- 7. PICS Proformas
-
- Section 2: Requirements on Abstract Test Suite Specifiers
-
- 8. Test Suite Production Process
-
- 9. Determining Conformance Requirements and PICS
-
- 10. Test Suite Structure
-
- 11. Generic Test Case Specification
-
- 12. Abstract Test Methods
-
- 13. Specification of Abstract Test Suites
-
- 14. Use of an Abstract Test Suite Specification
-
- 15. Test Suite Maintenance
-
- Annex A: Options
-
- Annex B: Guidance for Protocol Recommendations* writers
-
- Annex C: Incomplete Static Conformance Requirements
-
- Annex D: Tree and Tabular Combined Notation
-
- Appendix I: Applicability of the Test Methods to OSI* Protocols
-
- Appendix II: Index to Definitions of Terms
-
- Appendix III: Examples for guidance for PICS proforma
-
- Appendix IV: Example of choice of Abstract Test Methods
-
- PART 1: GENERAL CONCEPTS
-
- Introduction
-
- The objective of OSI will not be completely achieved until systems can be tested
- to determine whether they conform to the relevant "OSI or related CCITT X-Series or T-
- Series" (hereafter abbreviated to "OSI*") protocol "standard(s) or Recommendation(s)"
- (hereafter abbreviated to "Recommendation(s)*").
-
- Standardized test suites should be developed for each OSI* protocol
- Recommendation, for use by suppliers or implementors in self-testing, by users of OSI
- products, by the administrations* or other third party testers. This should lead to
- comparability and wide acceptance of test results produced by different testers, and
- thereby minimize the need for repeated conformance testing of the same system.
-
- The standardization of test suites requires international definition and
- acceptance of a common testing methodology and appropriate testing methods and
- procedures. It is the purpose of this Recommendation to define the methodology, to
- provide a framework for specifying conformance test suites and define the procedures to
- be followed during testing.
-
- Conformance testing involves testing both the capabilities and behaviour of an
- implementation and checking what is observed against the conformance requirements in
- the relevant Recommendation(s)* and against what the implementor states the
- implementation's capabilities are.
-
- Conformance testing does not include assessment of the performance, robustness or
- reliability of an implementation. It cannot give judgements on the
- physical realization of the abstract service primitives, how a system is implemented,
- how it provides any requested service, nor the environment of the protocol
- implementation. It cannot, except in an indirect way, prove anything about the logical
- design of the protocol itself.
-
- The purpose of conformance testing is to increase the probability that different
- implementations are able to interwork. This is achieved by verifying them by means of a
- protocol test suite, thereby increasing the confidence that each implementation
- conforms to the protocol specification. Confidence in conformance to a protocol
- specification is particularly important when equipment supplied by different vendors is
- required to interwork.
-
- However, it should be borne in mind that the complexity of most protocols makes
- exhaustive testing impractical on both technical and economic grounds. Also, testing
- cannot guarantee conformance to a specification, since it can only detect errors, not
- demonstrate their absence. Thus passing a test suite alone cannot guarantee interworking.
- What it does do is give confidence that an implementation has the required capabilities
- and that its behaviour conforms consistently in representative instances of
- communication.
-
- It should be noted that the OSI reference model for CCITT applications
- (Recommendation X.200) states (in section 4.3):
-
- "Only the external behaviour of Open Systems is retained as the standard of
- behaviour of real Open Systems".
-
- This means that although aspects of both internal and external behaviour are
- described in OSI* Recommendations*, only the requirements on external behaviour have to
- be met by real open systems. Although some of the methods defined in this
- Recommendation do impose certain constraints on the implementor, for example that there
- be some means of realizing control and observation at one or more service access
- points, it should be noted that other methods defined herein do not impose such
- constraints.
-
- However, in the case of partial OSI* end-systems which provide OSI* protocols up
- to a specific layer boundary, it is desirable to test both the external behaviour of
- the implemented protocol entities and the potential of those entities for supporting
- correct external behaviour in higher layers.
-
- Detailed investigation of relative benefits, efficiency and constraints of all
- methods is addressed in various parts of this Recommendation. However, any organization
- contemplating the use of test methods defined in this Recommendation in a context such
- as certification should carefully consider the constraints on applicability and the
- benefits of the different possible test methods.
-
- Testing is voluntary as far as ISO/CCITT is concerned. Requirements for testing
- in procurement and other external contracts are not a matter for standardization.
-
- 1. Scope and field of application
-
- 1.1 This Recommendation specifies a general methodology for testing the conformance
- to OSI* protocol Recommendation(s)* of products in which the Recommendation(s)* are
- claimed to be implemented. The methodology also applies to testing conformance to
- transfer syntax Recommendation(s)* to the extent that this can be determined by testing each
- in combination with a specific OSI* protocol.
-
- 1.2 This Recommendation is structured into two separate parts:
-
- Part 1 identifies the different phases of the conformance testing process, these
- phases being characterized by four major roles. These roles are:
-
- a) the specification of abstract test suites for particular OSI* protocols;
-
- b) the derivation of executable test suites and associated testing tools;
-
- c) the role of a client of a test laboratory, having an implementation of
- OSI* protocols to be tested;
-
- d) the operation of conformance testing, culminating in the production of a
- conformance test report which gives the results in terms of the
- Recommendation(s)* and the test suite(s) used.
-
- Additionally, this Part provides tutorial material, together with definition of
- concepts and terms.
-
- Part 2 defines the requirements and guidance for the specification of abstract
- test suites for OSI* protocols.
-
- 1.3 In both parts of this Recommendation, the scope is limited to include only such
- information as is necessary to meet the following objectives:
-
- a) to achieve an adequate level of confidence in the tests as a guide to
- conformance;
-
-
- b) to achieve comparability between the results of the corresponding tests
- applied in different places at different times;
-
- c) to facilitate communication between the parties responsible for the roles
- described above.
-
- 1.4 One aspect of this scope is the framework for the development of OSI*
- test suites. For example:
-
- a) how they should relate to the various types of conformance requirement;
-
- b) the types of test to be standardized and the types not needing
- standardization;
-
- c) the criteria for selecting tests for inclusion in a conformance test
- suite;
-
- d) the notation to be used for defining tests;
-
- e) the structure of a test suite.
-
- 1.5 Certification, an administrative procedure which may follow conformance testing,
- is outside the scope of this Recommendation. Requirements for procurement and contracts
- are also outside the scope of this Recommendation.
-
- 1.6 The Physical layer and Media Access Control protocols are outside the field of
- application of this Recommendation.
-
- 2. References
-
- Recommendation X.200, Reference Model of Open Systems Interconnection for CCITT
- Applications. (See also ISO 7498)
-
- Recommendation X.210, Open Systems Interconnection Layer Service Definition
- Conventions. (See also ISO TR 8509)
-
- Recommendation X.209, Specification of Basic Encoding Rules for Abstract Syntax
- Notation One (ASN.1). (See also ISO 8825)
-
- Section 1: Terminology
-
- 3. Definitions
-
- 3.1 Reference model definitions
-
- This Recommendation is based upon the concepts developed in the Reference Model of
- Open Systems Interconnection for CCITT Applications (CCITT X.200), and makes use of the
- following terms defined in that Recommendation:
-
- a) (N)-entity
-
- b) (N)-service
-
- c) (N)-layer
-
-
- d) (N)-protocol
-
- e) (N)-service-access-point
-
- f) (N)-relay
-
- g) (N)-protocol-data-unit
-
- h) (N)-protocol-control-information
-
- i) (N)-user-data
-
- j) real open system
-
- k) subnetwork
-
- l) application-entity
-
- m) application-service-element
-
- n) transfer syntax
-
- o) Physical layer
-
- p) Data link layer
-
- q) Network layer
-
- r) Transport layer
-
- s) Session layer
-
- t) Presentation layer
-
- u) Application layer
-
- v) systems-management
-
- w) application-management
-
- x) layer-management
-
- 3.2 Terms defined in other Recommendations
-
- This Recommendation uses the following terms defined in the OSI Service
- Conventions (Recommendation X.210):
-
- a) service-user
-
- b) service-provider
-
- This Recommendation uses the following term defined in the ASN.1 - Basic
- Encoding Rules Recommendation (Recommendation X.209):
-
- c) encoding
-
- 3.3 Conformance testing definitions
-
- For the purpose of this Recommendation the definitions in 3.4 to 3.8 apply.
-
- 3.4 Basic terms
-
- 3.4.1 Implementation under test (IUT)
-
- That part of a real open system which is to be studied by testing, which should
- be an implementation of one or more OSI* protocols in an adjacent user/provider
- relationship.
-
- 3.4.2 System under test (SUT)
-
- The real open system in which the IUT resides.
-
- 3.4.3 Dynamic conformance requirements
-
- All those requirements (and options) which determine what observable behaviour is
- permitted by the relevant OSI* Recommendation(s)* in instances of communication.
-
- 3.4.4 Static conformance requirements
-
- Constraints which are placed in OSI* Recommendations* to facilitate interworking
- by defining the requirements for the capabilities of an implementation.
-
- Note - Static conformance requirements may be at a broad level, such as the grouping of
- functional units and options into protocol classes, or at a detailed level, such as the
- ranges of values that are to be supported for specific parameters or timers.
-
- 3.4.5 Capabilities of an IUT
-
- That set of functions and options in the relevant protocol(s) and, if
- appropriate, that set of facilities and options of the relevant service definition
- which are supported by the IUT.
-
- 3.4.6 Protocol implementation conformance statement (PICS)
-
- A statement made by the supplier of an OSI* implementation or system, stating
- the capabilities and options which have been implemented, and any features which have
- been omitted.
-
- 3.4.7 PICS proforma
-
- A document, in the form of a questionnaire, designed by the protocol specifier
- or conformance test suite specifier, which when completed for an OSI* implementation or
- system becomes the PICS.
-
- 3.4.8 Protocol implementation extra information for testing (PIXIT)
-
- A statement made by a supplier or implementor of an IUT which contains or
- references all of the information (in addition to that given in the PICS) related to
- the IUT and its testing environment, which will enable the test laboratory to run the
- appropriate test suite against the IUT.
-
- 3.4.9 PIXIT proforma
-
- A document, in the form of a questionnaire, provided by the test laboratory,
- which when completed during the preparation for testing becomes a PIXIT.
-
- 3.4.10 Conforming implementation
-
- An IUT which is shown to satisfy both static and dynamic conformance
- requirements, consistent with the capabilities stated in the PICS.
-
- 3.4.11 System conformance statement
-
- A document summarizing which OSI* Recommendations* are implemented and to which
- conformance is claimed.
-
- 3.4.12 Client
-
- The organization that submits a system or implementation for conformance
- testing.
-
- 3.4.13 Test laboratory
-
- An organization that carries out conformance testing. This can be a third
- party, a user organization, an administration*, or an identifiable part of the supplier
- organization.
-
- 3.5 Types of testing
-
- 3.5.1 Active testing
-
- The application of a test suite to a SUT, under controlled conditions, with the
- intention of observing the consequent actions of the IUT.
-
- 3.5.2 Passive testing
-
- The observation of PDU activity on a link, and checking whether or not the
- observed behaviour is allowed by the relevant Recommendation(s)*.
-
- 3.5.3 Multi-layer testing
-
- Testing the behaviour of a multi-layer IUT as a whole, rather than testing it
- layer by layer.
-
- 3.5.4 Embedded testing
-
- Testing the behaviour of a single layer within a multi-layer IUT without
- accessing the layer boundaries for that layer within the IUT.
-
- 3.5.5 Basic interconnection testing
-
- Limited testing of an IUT to determine whether or not there is sufficient
- conformance to the main features of the relevant protocol(s) for interconnection to be
- possible, without trying to perform thorough testing.
-
- 3.5.6 Capability testing
-
- Testing to determine the capabilities of an IUT.
-
- Note - This involves checking all mandatory capabilities and those optional ones that
- are stated in the PICS as being supported, but not checking those optional ones which
- are stated in the PICS as not supported by the IUT.
-
- 3.5.7 Static conformance review
-
- A review of the extent to which the static conformance requirements are met by
- the IUT, by comparing the static conformance requirements expressed in the relevant
- Recommendation(s)* with the PICS and the results of any associated capability testing.
-
- 3.5.8 Behaviour testing
-
- Testing the extent to which the dynamic conformance requirements are met by the
- IUT.
-
- 3.5.9 Conformance testing
-
- Testing the extent to which an IUT is a conforming implementation.
-
- 3.5.10 Conformance assessment process
-
- The complete process of accomplishing all conformance testing activities
- necessary to enable the conformance of an implementation or a system to one or more
- OSI* Recommendations* to be assessed. It includes the production of the PICS and PIXIT
- documents, preparation of the real tester and the SUT, the execution of one or more
- test suites, the analysis of the results and the production of the appropriate system
- and protocol conformance test reports.
-
- 3.6 Terminology of test suites
-
- 3.6.1 Abstract test method
-
- The description of how an IUT is to be tested, given at an appropriate level of
- abstraction to make the description independent of any particular implementation of
- testing tools, but with enough detail to enable tests to be specified for this method.
-
- 3.6.2 Abstract testing methodology
-
- An approach to describing and categorizing abstract test methods.
-
- 3.6.3 Abstract test case
-
- A complete and independent specification of the actions required to achieve a
- specific test purpose, defined at the level of abstraction of a particular abstract
- test method. It includes a preamble and a postamble to ensure starting and ending in a
- stable state (i.e., a state which can be maintained almost indefinitely, such as the
- "idle" state or "data transfer" state) and involves one or more consecutive or
- concurrent connections.
-
- Note 1 - The specification should be complete in the sense that it is sufficient to
- enable a verdict to be assigned unambiguously to each potentially observable outcome
- (i.e., sequence of test events).
-
- Note 2 - The specification should be independent in the sense that it should be
- possible to execute the derived executable test case in isolation from other such test
- cases (i.e., the specification should always include the possibility of starting and
- finishing in the "idle" state - that is without any existing connections except
- permanent ones). For some test cases, there may be prerequisites in the sense that
- execution might require some specific capabilities of the IUT, which should have been
- confirmed by results of the test cases executed earlier.
-
- 3.6.4 Executable test case
-
- A realization of an abstract test case.
-
- Note - In general the use of the word "test" will imply its normal English meaning.
- Sometimes it may be used as an abbreviation for abstract test case or executable test
- case. The context should make the meaning clear.
-
- 3.6.5 Test purpose
-
- A description of the objective which an abstract test case is designed to
- achieve.
-
- 3.6.6 Generic test case
-
- A specification of the actions required to achieve a specific test purpose,
- defined by a test body together with a description of the initial state in which the
- test body is to start.
-
- 3.6.7 Preamble
-
- The test steps needed to define the path from the starting stable state of the
- test case up to the initial state from which the test body will start.
-
- 3.6.8 Test body
-
- The set of test steps that are essential in order to achieve the test purpose
- and assign verdicts to the possible outcomes.
-
- 3.6.9 Postamble
-
- The test steps needed to define the paths from the end of the test body up to
- the finishing stable state for the test case.
-
- 3.6.10 Test step
-
- A named subdivision of a test case, constructed from test events and/or other
- test steps, and used to modularize abstract test cases.
-
- 3.6.11 Test event
-
- An indivisible unit of test specification at the level of abstraction of the
- specification (e.g., sending or receiving a single PDU).
-
- 3.6.12 Test suite
-
-
- A complete set of test cases, possibly combined into nested test groups, that is
- necessary to perform conformance testing or basic interconnection testing for an IUT or
- protocol within an IUT.
-
- 3.6.13 Test case
-
- A generic, abstract or executable test case.
-
- 3.6.14 Test group
-
- A named set of related test cases.
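-
- Note - The relationship between the terms defined in 3.6.3 and 3.6.7 to 3.6.14 can be
- pictured as a nested structure: a test suite contains test groups, a test group contains
- test cases, and a test case is built from a preamble, a test body and a postamble, each
- of which is composed of test steps made up of test events. The following Python fragment
- is an informal, illustrative sketch only; the class and field names are hypothetical and
- are chosen merely to mirror those definitions.
-
- from dataclasses import dataclass, field
- from typing import List
-
- @dataclass
- class TestEvent:
-     # indivisible unit of test specification (3.6.11), e.g. "send a single PDU"
-     description: str
-
- @dataclass
- class TestStep:
-     # named subdivision of a test case (3.6.10), built from test events
-     name: str
-     events: List[TestEvent] = field(default_factory=list)
-
- @dataclass
- class TestCase:
-     # preamble, test body and postamble as in 3.6.3 and 3.6.7 to 3.6.9
-     purpose: str
-     preamble: List[TestStep] = field(default_factory=list)
-     body: List[TestStep] = field(default_factory=list)
-     postamble: List[TestStep] = field(default_factory=list)
-
- @dataclass
- class TestGroup:
-     # named set of related test cases (3.6.14), possibly nested
-     name: str
-     cases: List[TestCase] = field(default_factory=list)
-     subgroups: List["TestGroup"] = field(default_factory=list)
-
- @dataclass
- class TestSuite:
-     # complete set of test cases, possibly combined into nested test groups (3.6.12)
-     name: str
-     groups: List[TestGroup] = field(default_factory=list)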
-
- 3.6.15 Generic test suite
-
- A test suite composed of generic test cases, with the same coverage as the
- complete set of test purposes for the particular protocol, this being the set or a
- superset of the test purposes of any particular abstract test suite for the same
- protocol.
-
- 3.6.16 Abstract test suite
-
- A test suite composed of abstract test cases.
-
- 3.6.17 Executable test suite
-
- A test suite composed of executable test cases.
-
- 3.6.18 Conformance test suite
-
- A test suite for conformance testing of one or more OSI* protocols.
-
- Note - It should cover both capability testing and behaviour testing. It may be
- qualified by the adjectives: abstract, generic or executable, as appropriate. Unless
- stated otherwise, an "abstract test suite" is meant.
-
- 3.6.19 Basic interconnection test suite
-
- A test suite for basic interconnection testing of one or more OSI* protocols.
-
- 3.6.20 Selected abstract test suite
-
- The subset of an abstract test suite selected using a specific PICS.
-
- 3.6.21 Selected executable test suite
-
- The subset of an executable test suite selected using a specific PICS and
- corresponding to a selected abstract test suite.
-
- 3.6.22 Parameterized abstract test case
-
- An abstract test case in which all appropriate parameters have been supplied
- with values in accordance with a specific PICS and PIXIT.
-
- 3.6.23 Parameterized executable test case
-
-
- An executable test case in which all appropriate parameters have been supplied
- with values in accordance with a specific PICS and PIXIT.
-
- 3.6.24 Parameterized abstract test suite
-
- A selected abstract test suite in which all test cases have been made
- parameterized abstract test cases for the appropriate PICS and PIXIT.
-
- 3.6.25 Parameterized executable test suite
-
- A selected executable test suite in which all test cases have been made
- parameterized executable test cases for the appropriate PICS and PIXIT, and
- corresponding to a parameterized abstract test suite.
-
- 3.7 Terminology of results
-
- 3.7.1 Repeatability (of results)
-
- Characteristic of a test case, such that repeated executions on the same IUT
- lead to the same verdict, and by extension a characteristic of a test suite.
-
- 3.7.2 Comparability (of results)
-
- Characteristic of conformance assessment processes, such that their execution on
- the same IUT, in different test environments, leads to the same overall summary.
-
- 3.7.3 Outcome
-
- A sequence of test events together with the associated input/output, either
- identified by an abstract test case specifier, or observed during test execution.
-
- 3.7.4 Foreseen outcome
-
- An outcome identified or categorized in the abstract test case specification.
-
- 3.7.5 Unforeseen outcome
-
- An outcome not identified or categorized in the abstract test case
- specification.
-
- 3.7.6 Verdict
-
- Statement of "pass", "fail" or "inconclusive" concerning conformance of an IUT
- with respect to a test case that has been executed and which is specified in the
- abstract test suite.
-
- 3.7.7 System conformance test report (SCTR)
-
- A document written at the end of the conformance assessment process, giving the
- overall summary of the conformance of the system to the set of protocols for which
- conformance testing was carried out.
-
- 3.7.8 Protocol conformance test report (PCTR)
-
- A document written at the end of the conformance assessment process, giving the
-
- details of the testing carried out for a particular protocol, including the
- identification of the abstract test cases for which corresponding executable test cases
- were run and for each test case the test purpose and verdict.
-
- 3.7.9 Valid test event
-
- A test event which is allowed by the protocol Recommendation*, being both
- syntactically correct and occurring or arriving in an allowed context in an observed
- outcome.
-
- 3.7.10 Syntactically invalid test event
-
- A test event which syntactically is not allowed by the protocol Recommendation*.
-
- Note - The use of "invalid test event" is deprecated.
-
- 3.7.11 Inopportune test event
-
- A test event which, although syntactically correct, occurs or arrives at a point
- in an observed outcome when not allowed to do so by the protocol Recommendation*.
-
- 3.7.12 "Pass" verdict
-
- A verdict given when the observed outcome satisfies the test purpose and is
- valid with respect to the relevant Recommendation(s)* and with respect to the PICS.
-
- 3.7.13 "Fail" verdict
-
- A verdict given when the observed outcome is syntactically invalid or
- inopportune with respect to the relevant Recommendation(s)* or the PICS.
-
- 3.7.14 "Inconclusive" verdict
-
- A verdict given when the observed outcome is valid with respect to the relevant
- Recommendation(s)* but prevents the test purpose from being accomplished.
-
- 3.7.15 Conformance log
-
- A record of sufficient information necessary to verify verdict assignments as a
- result of conformance testing.
-
- 3.8 Terminology of test methods
-
- 3.8.1 Point of control and observation (PCO)
-
- A point at which control and observation is specified in a test case.
-
- 3.8.2 Lower tester
-
- The abstraction of the means of providing, during test execution, control and
- observation at the appropriate PCO either below the IUT or remote from the IUT, as
- defined by the chosen abstract test method.
-
- 3.8.3 Upper tester
-
- The abstraction of the means of providing, during test execution, control and
- observation of the upper service boundary of the IUT, plus the control and observation
- of any relevant abstract local primitive.
-
- 3.8.4 Abstract (N)-service-primitive ((N)-ASP)
-
- An implementation independent description of an interaction between a service-
- user and a service-provider at an (N)-service boundary, as defined in an OSI* service
- definition Recommendation*.
-
- 3.8.5 Abstract local primitive (ALP)
-
- An abbreviation for a description of control and/or observation to be performed
- by the upper tester, which cannot be described in terms of ASPs but which relates to
- events or states defined within the protocol Recommendation(s)* relevant to the IUT.
-
- Note - The PIXIT will indicate whether or not a particular ALP can be realized within
- the SUT. The ability of the SUT to support particular ALPs as specified in the PIXIT
- will be used as a criterion in the test selection process.
-
- 3.8.6 Test coordination procedures
-
- The rules for cooperation between the lower and upper testers during testing.
-
- 3.8.7 Test management protocol
-
- A protocol which is used as a realization of the test coordination procedures
- for a particular test suite.
-
- 3.8.8 Local test methods
-
- Abstract test methods in which the PCOs are directly at the layer boundaries of
- the IUT.
-
- 3.8.9 External test methods
-
- Abstract test methods in which the lower tester is separate from the SUT and
- communicates with it via an appropriate lower layer service-provider.
-
- Note - The service-provider is immediately beneath the (lowest layer) protocol which is
- the focus of the testing, and may involve multiple OSI layers.
-
- 3.8.10 Distributed test method
-
- An external test method in which there is a PCO at the layer boundary at the top
- of the IUT.
-
- 3.8.11 Coordinated test method
-
- An external test method for which a standardized test management protocol is
- defined as the realization of the test coordination procedures, enabling the control
- and observation to be specified solely in terms of the lower tester activity, including
- the control and observation of test management PDUs.
-
- 3.8.12 Remote test method
-
- An external test method in which there is neither a PCO above the IUT nor a
- standardized test management protocol; some requirements for test coordination
- procedures may be implied or informally expressed in the abstract test suite but no
- assumption is made regarding their feasibility or realization.
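-
- Note - The local, distributed, coordinated and remote test methods defined in 3.8.8 to
- 3.8.12 differ mainly in whether the lower tester is external to the SUT, whether a PCO
- exists at the top of the IUT, and whether a standardized test management protocol
- realizes the test coordination procedures. The Python fragment below is an informal,
- illustrative sketch only, recording those distinguishing attributes under hypothetical
- names.
-
- from dataclasses import dataclass
-
- @dataclass
- class AbstractTestMethod:
-     name: str
-     lower_tester_external: bool     # lower tester separate from the SUT (external methods, 3.8.9)
-     pco_above_iut: bool             # PCO at the layer boundary at the top of the IUT
-     standardized_tm_protocol: bool  # standardized test management protocol for coordination
-
- # Distinguishing attributes as implied by 3.8.8 to 3.8.12 (simplified)
- LOCAL       = AbstractTestMethod("local",       False, True,  False)
- DISTRIBUTED = AbstractTestMethod("distributed", True,  True,  False)
- COORDINATED = AbstractTestMethod("coordinated", True,  False, True)
- REMOTE      = AbstractTestMethod("remote",      True,  False, False)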
-
- 3.8.13 Real tester
-
- The realization of the lower tester, plus either the definition or the
- realization of the upper tester, plus the definition of the test coordination
- procedures, as appropriate to a particular test method.
-
- 3.8.14 Test realizer
-
- An organization which takes responsibility for providing, in a form independent
- of client and IUT, the means of testing IUTs in conformance with the abstract test
- suite.
-
- 4. Abbreviations
-
- For the purposes of this Recommendation the following abbreviations apply.
-
-
- Administration*: Administration or recognized private operating
- agency.
-
- ALP: abstract local primitive
-
- ASP: abstract service primitive
-
- DTE: data terminal equipment
-
- IUT: implementation under test
-
- OSI: open systems interconnection
-
- OSI*: OSI or related CCITT X-Series or T-Series Recommendations
-
- PCO: point of control and observation
-
- PCTR: protocol conformance test report
-
- PDU: protocol data unit
-
- PICS: protocol implementation conformance statement
-
- PIXIT: protocol implementation extra information for testing
-
- Recommendation*: Standard or Recommendation
-
- SAP: service access point
-
- SCTR: system conformance test report
-
- SUT: system under test
-
- TM-PDU: test management PDU
-
- Section 2: Overview
-
- 5. The meaning of conformance in OSI*
-
- 5.1 Introduction
-
- In the context of OSI*, a real system is said to exhibit conformance if it
- complies with the requirements of applicable OSI* Recommendations* in its
- communication with other real systems.
-
- Applicable OSI* Recommendations* include protocol Recommendations*, and
- transfer syntax Recommendations* inasmuch as they are implemented in conjunction
- with protocols.
-
- OSI* Recommendations* form a set of interrelated Recommendations* which
- together define behaviour of open systems in their communication. Conformance of a
- real system will, therefore, be expressed at two levels, conformance to each
- individual Recommendation*, and conformance to the set.
-
- Note - If the implementation is based on a predefined set of Recommendations*, often
- referred to as a functional standard or profile, the concept of conformance can be
- extended to specific requirements expressed in the functional standard or profile,
- as long as they do not conflict with the requirements of the base Recommendations*.
-
- 5.2 Conformance requirements
-
- 5.2.1 The conformance requirements in a Recommendation* can be:
-
- a) mandatory requirements: these are to be observed in all cases;
-
- b) conditional requirements: these are to be observed if the conditions
- set out in the Recommendation* apply;
-
- c) options: these can be selected to suit the implementation, provided
- that any requirements applicable to the option are observed. More
- information on options is provided in Annex A.
-
- For example, CCITT essential facilities are mandatory requirements;
- additional facilities can be either conditional or optional requirements.
-
- Note - The CCITT terms "essential facilities" and "additional facilities" need to be
- considered in the context of the scope of the CCITT Recommendation concerned; in
- many cases, essential facilities are mandatory for networks but not for DTEs.
-
- 5.2.2 Furthermore, conformance requirements in a Recommendation* can be stated
-
- a) positively: they state what shall be done;
-
- b) negatively (prohibitions): they state what shall not be done.
-
- 5.2.3 Finally, conformance requirements fall into two groups:
-
- a) static conformance requirements;
-
- b) dynamic conformance requirements.
-
- These are discussed in 5.3 and 5.5, respectively.
-
- 5.3 Static conformance requirements
-
- Static conformance requirements are those that define the allowed minimum
- capabilities of an implementation, in order to facilitate interworking. These
- requirements may be at a broad level, such as the grouping of functional units and
- options into protocol classes, or at a detailed level, such as a range of values
- that have to be supported for specific parameters or timers.
-
- Static conformance requirements and options in OSI* Recommendations* can be
- of two varieties:
-
- a) those which determine the capabilities to be included in the
- implementation of the particular protocol;
-
- b) those which determine multi-layer dependencies, e.g., those which place
- constraints on the capabilities of the underlying layers of the system
- in which the protocol implementation resides. These are likely to be
- found in upper layer Recommendations*.
-
- All capabilities not explicitly stated as static conformance requirements
- are to be regarded as optional.
-
- 5.4 Protocol implementation conformance statement (PICS)
-
- To evaluate the conformance of a particular implementation, it is necessary
- to have a statement of the capabilities and options which have been implemented, and
- any features which have been omitted, so that the implementation can be tested for
- conformance against relevant requirements, and against those requirements only. Such
- a statement is called a Protocol Implementation Conformance Statement (PICS).
-
- In a PICS there should be a distinction between the following categories of
- information which it may contain:
-
- a) information related to the mandatory, optional and conditional static
- conformance requirements of the protocol itself;
-
- b) information related to the mandatory, optional and conditional static
- conformance requirements for multi-layer dependencies.
-
- If a set of interrelated OSI* protocol Recommendations* has been implemented
- in a system, a PICS is needed for each protocol. A System Conformance Statement will
- also be necessary, summarizing all protocols in the system for each of which a
- distinct PICS is provided.
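-
- Note - As an informal illustration only, a PICS can be modelled as a mapping from
- capabilities to supported/not-supported answers, and a static conformance review (3.5.7)
- as a comparison of that mapping against the static conformance requirements of the
- protocol. The Python sketch below uses hypothetical names and a deliberately simplified
- model of the requirements.
-
- from typing import Dict, List, Set
-
- def static_conformance_review(pics: Dict[str, bool],
-                               mandatory: Set[str]) -> List[str]:
-     """Return the static conformance problems revealed by the PICS alone.
-
-     Capabilities not listed as mandatory are regarded as optional (see 5.3)."""
-     problems = []
-     for capability in sorted(mandatory):
-         if not pics.get(capability, False):
-             problems.append("mandatory capability not supported: " + capability)
-     return problems
-
- # Hypothetical example: one mandatory and one optional functional unit
- example_pics = {"kernel functional unit": True, "expedited data": False}
- print(static_conformance_review(example_pics, mandatory={"kernel functional unit"}))
- # -> []  (no problems: the mandatory capability is stated as supported)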
-
- 5.5 Dynamic conformance requirements
-
- Dynamic conformance requirements are all those requirements (and options)
- which determine what observable behaviour is permitted by the relevant OSI*
- Recommendation(s)* in instances of communication. They form the bulk of each OSI*
- protocol Recommendation*. They define the set of allowable behaviours of an
- implementation or real system. This set defines the maximum capability that a
- conforming implementation or real system can have within the terms of the OSI*
- protocol Recommendation*.
-
- A system exhibits dynamic conformance in an instance of communication if its
- behaviour is a member of the set of all behaviours permitted by the relevant OSI*
- protocol Recommendation(s)* in a way which is consistent with the PICS.
-
- 5.6 A conforming system
-
- A conforming system or implementation is one which is shown to satisfy both
- static and dynamic conformance requirements, consistent with the capabilities stated
- in the PICS, for each protocol declared in the System Conformance Statement.
-
- 5.7 Interworking and conformance
-
- 5.7.1 The primary purpose of conformance testing is to increase the probability
- that different implementations are able to interwork.
-
- Successful interworking of two or more real open systems is more likely to
- be achieved if they all conform to the same subset of an OSI* Recommendation*, or to
- the same selection of OSI* Recommendations*, than if they do not.
-
- In order to prepare two or more systems to interwork successfully, it is
- recommended that a comparison be made of the System Conformance Statements and PICSs
- of these systems.
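-
- Note - As an informal illustration only, the PICS comparison recommended above can be
- reduced, in its simplest form, to identifying the capabilities stated as supported by
- both systems and those stated as supported by only one of them. The Python sketch below
- uses hypothetical capability names.
-
- from typing import Dict, Set, Tuple
-
- def compare_pics(pics_a: Dict[str, bool],
-                  pics_b: Dict[str, bool]) -> Tuple[Set[str], Set[str]]:
-     """Return (capabilities supported by both systems, capabilities supported by only one)."""
-     supported_a = {cap for cap, supported in pics_a.items() if supported}
-     supported_b = {cap for cap, supported in pics_b.items() if supported}
-     return supported_a & supported_b, supported_a ^ supported_b
-
- common, mismatched = compare_pics({"kernel": True, "class 2 operation": True},
-                                   {"kernel": True, "class 2 operation": False})
- print(common)      # {'kernel'}
- print(mismatched)  # {'class 2 operation'}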
-
- If there is more than one version of a relevant OSI* Recommendation*
- indicated in the PICSs, the differences between the versions need to be identified
- and their implications considered, including their use in combination with
- other Recommendations*.
-
- 5.7.2 While conformance is a necessary condition, it is not on its own a
- sufficient condition to guarantee interworking capability. Even if two
- implementations conform to the same OSI* protocol Recommendation*, they may fail to
- interwork because of factors outside the scope of that Recommendation.
-
- Trial interworking is recommended in order to detect these factors. Further
- information to assist interworking between two systems can be obtained by extending
- the PICS comparison to other relevant information, including test reports and PIXIT
- (see clause 6.2). The comparison can focus on:
-
- a) additional mechanisms claimed to work around known ambiguities or
- deficiencies not yet corrected in the Recommendations* or in peer real
- systems, e.g., solution of multi-layer problems;
-
- b) selection of free options which are not taken into account in the
- static conformance requirements of the Recommendations*;
-
- c) the existence of timers not specified in the Recommendation* and their
- associated values.
-
- Note - The comparison can be made between two individual systems, between two or
- more types of product, or, for the PICS comparison only, between two or more
- specifications for procurement, permissions to connect, etc.
-
- 6. Conformance and testing
-
- 6.1 Objectives of conformance testing
-
- 6.1.1 Introduction
-
- Conformance testing as discussed in this Recommendation is focused on
- testing for conformance to OSI* protocol Recommendations*. However, it also applies
- to testing for conformance to OSI* transfer syntax Recommendations*, to the extent
- that this can be carried out by testing the transfer syntax in combination with an
- OSI* protocol.
-
- In principle, the objective of conformance testing is to establish whether
- the implementation being tested conforms to the specification in the relevant
- Recommendation*. Practical limitations make it impossible to be exhaustive, and
- economic considerations may restrict testing still further.
-
- Therefore, this Recommendation distinguishes four types of testing,
- according to the extent to which they provide an indication of conformance:
-
- a) basic interconnection tests, which provide prima facie evidence that an
- IUT conforms;
-
- b) capability tests, which check that the observable capabilities of the
- IUT are in accordance with the static conformance requirements and the
- capabilities claimed in the PICS;
-
- c) behaviour tests, which endeavour to provide testing which is as
- comprehensive as possible over the full range of dynamic conformance
- requirements specified by the Recommendation*, within the capabilities
- of the IUT;
-
- d) conformance resolution tests, which probe in depth the conformance of
- an IUT to particular requirements, to provide a definite yes/no answer
- and diagnostic information in relation to specific conformance issues;
- such tests are not standardized.
-
- 6.1.2 Basic interconnection tests
-
- 6.1.2.1 Basic interconnection tests provide limited testing of an IUT in relation to
- the main features in a Recommendation*, to establish that there is sufficient
- conformance for interconnection to be possible, without trying to perform thorough
- testing.
-
- 6.1.2.2 Basic interconnection tests are appropriate:
-
- a) for detecting severe cases of non-conformance;
-
- b) as a preliminary filter before undertaking more costly tests;
-
- c) to give a prima facie indication that an implementation which has
- passed full conformance tests in one environment still conforms in a
- new environment (e.g., before testing an (N)-implementation, to check
- that a tested (N-1)-implementation has not undergone any severe change
- due to being linked to the (N)-implementation);
-
- d) for use by users of implementations, to determine whether the
- implementations appear to be usable for communication with other
- conforming implementations, e.g., as a preliminary to data interchange.
-
- 6.1.2.3 Basic interconnection tests are inappropriate:
-
- a) as a basis for claims of conformance by the supplier of an
- implementation;
-
- b) as a means of arbitration to determine causes for communications
- failure.
-
- 6.1.2.4 Basic interconnection tests should be standardized as either a very small
- test suite or a subset of a conformance test suite (including capability and
- behaviour tests). They can be used on their own or together with a conformance test
- suite. The existence and execution of basic interconnection tests are optional.
-
- 6.1.3 Capability tests
-
- 6.1.3.1 Capability tests provide limited testing of each of the static conformance
- requirements in a Recommendation*, to ascertain what capabilities of the IUT can be
- observed and to check that those observable capabilities are valid with respect to
- the static conformance requirements and the PICS.
-
- 6.1.3.2 Capability tests are appropriate:
-
- a) to check as far as possible the consistency of the PICS with the IUT;
-
- b) as a preliminary filter before undertaking more in-depth and costly
- testing;
-
- c) to check that the capabilities of the IUT are consistent with the
- static conformance requirements;
-
- d) to enable efficient selection of behaviour tests to be made for a
- particular IUT;
-
- e) when taken together with behaviour tests, as a basis for claims of
- conformance.
-
- 6.1.3.3 Capability tests are inappropriate:
-
- a) on their own, as a basis for claims of conformance by the supplier of
- an implementation;
-
- b) for testing in detail the behaviour associated with each capability
- which has been implemented or not implemented;
-
- c) for resolution of problems experienced during live usage or where other
- tests indicate possible non-conformance even though the capability
- tests have been satisfied.
-
- 6.1.3.4 Capability tests are standardized within a conformance test suite. They can either
- be separated into their own test group(s) or merged with the behaviour tests.
-
- 6.1.4 Behaviour tests
-
- 6.1.4.1 Behaviour tests exercise an implementation as thoroughly as is practical, over the
- full range of dynamic conformance requirements specified in a Recommendation*. Since the
- number of possible combinations of events and timing of events is infinite, such testing
- cannot be exhaustive. There is a further limitation, namely that these tests are designed
- to be run collectively in a single test environment, so that any faults which are
- difficult or impossible to detect in that environment are likely to be missed. Therefore,
- it is possible that a non-conforming implementation passes the conformance test suite;
- one aim of the test suite design is to minimize the number of times that this occurs.
-
- 6.1.4.2 Behaviour tests are appropriate, when taken together with capability tests, as a
- basis for the conformance assessment process.
-
- 6.1.4.3 Behaviour tests are inappropriate for resolution of problems experienced during
- live usage or where other tests indicate possible non-conformance even though the
- behaviour tests have been satisfied.
-
- 6.1.4.4 Behaviour tests are standardized as the bulk of a conformance test suite.
-
- Note - Behaviour tests include tests for valid behaviour by the IUT in response to valid,
- inopportune and syntactically invalid protocol behaviour by the real tester. This
- includes testing the rejection by the IUT of attempts to use features (capabilities)
- which are stated in the PICS as being not implemented. Thus, capability tests do not need
- to include tests for capabilities omitted from the PICS.
-
- 6.1.5 Conformance resolution tests
-
- 6.1.5.1 Conformance resolution tests provide diagnostic answers, as near to definitive as
- possible, to the resolution of whether an implementation satisfies particular
- requirements. Because of the problems of exhaustiveness noted in 6.1.4.1, the definite
- answers are gained at the expense of confining tests to a narrow field.
-
- 6.1.5.2 The test architecture and test method will normally be chosen specifically for the
- requirements to be tested, and need not be ones that are generally useful for other
- requirements. They may even be ones that are regarded as being unacceptable for
- (standardized) abstract conformance test suites, e.g., involving implementation specific
- methods using, say, the diagnostic and debugging facilities of the specific operating
- system.
-
- 6.1.5.3 The distinction between behaviour tests and conformance resolution tests may be
- illustrated by the case of an event such as a Reset. The behaviour tests may include only
- a representative selection of conditions under which a Reset might occur, and may fail to
- detect incorrect behaviour in other
- circumstances. The conformance resolution tests would be confined to conditions under
- which incorrect behaviour was already suspected to occur, and would confirm whether or
- not the suspicions were correct.
-
- 6.1.5.4 Conformance resolution tests are appropriate:
-
- a) for providing a yes/no answer in a strictly confined and previously
- identified situation (e.g., during implementation development, to check
- whether a particular feature has been correctly implemented, or during
- operational use, to investigate the cause of problems);
-
- b) as a means for identifying and offering resolutions for deficiencies in a
- current conformance test suite.
-
- 6.1.5.5 Conformance resolution tests are inappropriate as a basis for judging whether or
- not an implementation conforms overall.
-
- 6.1.5.6 Conformance resolution tests are not standardized.
-
- Note on 6.1 - As a by-product of conformance testing, errors and deficiencies in protocol
- Recommendations* may be identified.
-
- 6.2 Protocol implementation extra information for testing (PIXIT)
-
- In order to test a protocol implementation, the test laboratory will require
- information relating to the IUT and its testing environment in addition to that provided
- by the PICS. This "Protocol Implementation eXtra Information for Testing" (PIXIT) will be
- provided by the client submitting the implementation for testing, as a result of
- consultation with the test laboratory.
-
- The PIXIT may contain the following information:
-
- a) information needed by the test laboratory in order to be able to run the
- appropriate test suite on the specific system (e.g., information related to
- the test method to be used to run the test cases, addressing information);
-
- b) information already mentioned in the PICS and which needs to be made precise
- (e.g., a timer value range which is declared as a parameter in the PICS
- should be specified in the PIXIT);
-
- c) information to help determine which capabilities stated in the PICS as being
- supported are testable and which are untestable;
-
- d) other administrative matters (e.g., the IUT identifier, reference to the
- related PICS).
-
- The PIXIT should not conflict with the appropriate PICS.
-
- The abstract test suite specifier, test realizer and test laboratory will all
- contribute to the development of the PIXIT proforma.
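-
- Note - A PIXIT proforma is in effect a questionnaire covering the kinds of information
- listed in a) to d) above, and the completed PIXIT supplies the client's answers. The
- Python fragment below is an informal, illustrative sketch only; every entry shown is
- hypothetical.
-
- # Hypothetical PIXIT proforma: question identifiers mapped to the questions asked
- pixit_proforma = {
-     "iut.identifier": "Identifier of the IUT (see d) above)",
-     "pics.reference": "Reference to the related PICS (see d) above)",
-     "tester.address": "Address at which the lower tester can reach the IUT (see a) above)",
-     "timer.value":    "Value chosen within the timer range declared in the PICS (see b) above)",
-     "alp.supported":  "Which abstract local primitives can the SUT realize? (see c) above)",
- }
-
- # The completed PIXIT pairs the same identifiers with the answers given by the client
- completed_pixit = {
-     "iut.identifier": "example IUT, version 1",
-     "pics.reference": "PICS for the example IUT",
-     "tester.address": "an address agreed with the test laboratory",
-     "timer.value":    "10 seconds",
-     "alp.supported":  "reset only",
- }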
-
- 6.3 Conformance assessment process outline
-
- 6.3.1 The main feature of the conformance assessment process is a configuration of
- equipment allowing exchanges of information between the IUT and a real tester. These
- exchanges are controlled and observed by the real tester.
-
- 6.3.2 In conceptual outline, conformance testing should include several steps, involving
- both static conformance reviews and live testing phases, culminating in the production of
- a test report which is as thorough as is practical.
-
- 6.3.3 These steps are:
-
-
- a) analysis of the PICS;
-
- b) test selection and parameterization;
-
- c) basic interconnection testing (optional);
-
- d) capability testing;
-
- e) behaviour testing;
-
- f) review and analysis of test results;
-
- g) synthesis, conclusions and conformance test report production.
-
- These are illustrated in Figure 1.
-
- Prior to the execution of any of the tests, the IUT's PICS and PIXIT are input to
- the test case selection and parameterization process.
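-
- Note - Steps a) and b) above take the PICS and PIXIT as input and yield a selected,
- parameterized test suite (see 3.6.20 and 3.6.24). The Python fragment below is an
- informal, illustrative sketch only of that selection and parameterization; the names
- and fields used are hypothetical.
-
- from dataclasses import dataclass, field
- from typing import Dict, List, Set
-
- @dataclass
- class AbstractTestCase:
-     identifier: str
-     required_capabilities: Set[str]   # capabilities of the IUT the test case presupposes
-     parameters: Dict[str, str] = field(default_factory=dict)  # supplied from the PICS/PIXIT
-
- def select_and_parameterize(suite: List[AbstractTestCase],
-                             pics: Dict[str, bool],
-                             pixit: Dict[str, str]) -> List[AbstractTestCase]:
-     """Keep the test cases whose required capabilities the PICS states as supported,
-     then supply their parameters with values taken from the PIXIT."""
-     selected = [tc for tc in suite
-                 if all(pics.get(cap, False) for cap in tc.required_capabilities)]
-     for tc in selected:
-         for name in tc.parameters:
-             if name in pixit:
-                 tc.parameters[name] = pixit[name]
-     return selected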
-
- 6.4 Analysis of results
-
- 6.4.1 General
-
- 6.4.1.1 Outcomes and verdicts
-
- The observed outcome (of the test execution) is the series of events which
- occurred during execution of a test case; it includes all input to and output from the
- IUT at the points of control and observation.
-
- The foreseen outcomes are identified and defined by the abstract test case
- specification taken in conjunction with the protocol Recommendation*. For each test case,
- there may be one or more foreseen outcome(s). Foreseen outcomes are defined primarily in
- abstract terms.
-
- A verdict is a statement of pass, fail or inconclusive to be associated with every
- foreseen outcome in the abstract test suite specification.
-
- The analysis of results is performed by comparing the observed outcomes with
- foreseen outcomes.
-
- The verdict assigned to an observed outcome is that associated with the matching
- foreseen outcome. If the observed outcome is unforeseen then the abstract test suite
- specification will state what default verdict shall be assigned.
-
- The means by which the comparison of the observed outcomes with the foreseen
- outcomes is made is outside the scope of this Recommendation.
-
- Note - Amongst the possibilities are:
-
- a) manual or automated comparison (or a mixture);
-
- b) comparison at or after execution time;
-
- c) translating the observed outcomes into abstract terms for comparison with
- the foreseen outcomes or translating the foreseen outcomes into the terms
- used to record the observed outcomes.
-
- The verdict will be pass, fail or inconclusive:
-
- a) pass means that the observed outcome satisfies the test purpose and is valid
- with respect to the relevant Recommendation(s)* and with respect to the PICS;
-
- b) fail means that the observed outcome is syntactically invalid or inopportune
- with respect to the relevant Recommendation(s)* or the PICS;
-
- c) inconclusive means that the observed outcome is valid with respect to the
- relevant Recommendation(s)* but prevents the test purpose from being
- accomplished.
-
- The verdict assigned to a particular outcome will depend on the test purpose and
- the validity of the observed protocol behaviour.
-
- The verdicts made in respect of individual test cases will be synthesized into an
- overall summary for the IUT based on the test cases executed.
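-
- Note - The assignment of verdicts described above amounts to matching the observed
- outcome against the foreseen outcomes of the abstract test case and falling back on the
- default verdict stated in the abstract test suite when no match is found; the individual
- verdicts are then synthesized into an overall summary. The Python fragment below is an
- informal, illustrative sketch only, with a deliberately simplified synthesis rule.
-
- from typing import Dict, List, Tuple
-
- Outcome = Tuple[str, ...]   # an outcome modelled as a sequence of test events (simplified)
-
- def assign_verdict(observed: Outcome,
-                    foreseen: Dict[Outcome, str],
-                    default_verdict: str) -> str:
-     """Return the verdict associated with the matching foreseen outcome, or the
-     default verdict specified for unforeseen outcomes."""
-     return foreseen.get(observed, default_verdict)
-
- def overall_summary(verdicts: List[str]) -> str:
-     """Synthesize the verdicts of the executed test cases (simplified rule)."""
-     if "fail" in verdicts:
-         return "fail verdict recorded for at least one executed test case"
-     if "inconclusive" in verdicts:
-         return "no fail verdicts, but some test purposes were not accomplished"
-     return "pass verdicts recorded for all executed test cases"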
-
- 6.4.1.2 Conformance test reports
-
- The results of conformance testing will be documented in a set of conformance test
- reports. These reports will be of two types: a System Conformance Test Report (SCTR), and
- a Protocol Conformance Test Report (PCTR).
-
- The SCTR, which will always be provided, gives an overall summary of the
- conformance status of the SUT, with respect to its single or multi-layer IUT. A standard
- proforma for the SCTR is for further study.
-
- The PCTR, one of which will be issued for each protocol tested in the SUT,
- documents all of the results of the test cases giving references to the conformance logs
- which contain the observed outcomes. The PCTR also gives reference to all necessary
- documents relating to the conduct of the conformance assessment process for that
- protocol.
-
- A standard proforma for the PCTR is for further study. The ordered list of test
- cases to be used in the PCTR will be specified in the conformance test suite
- Recommendation*.
-
- 6.4.2 Repeatability of results
-
- In order to achieve the objective of credible conformance testing, it is clear
- that the result of executing a test case on an IUT should be the same whenever it is
- performed. Statistically, it may not be possible to perform a complete conformance test
- suite and observe outcomes which are completely identical to those obtained on another
- occasion: unforeseen events do occur, and this is a feature of the environments involved.
- Nevertheless, at the test case level, it is very important that every effort is made by
- the test specifiers and test laboratories to minimize the possibility that a test case
- produces different outcomes on different occasions.
-
- 6.4.3 Comparability of results
-
- In order to achieve the ultimate objectives of conformance testing, the overall
- summary concerning conformance of an IUT has to be independent of the test environment in
- which the testing takes place. That is to say, the standardization of all of the
- procedures concerned with conformance testing should result in a comparable overall
- summary being accorded to the IUT, whether the testing is done by the supplier, a user,
- or by any third party test house. There are a large number of factors to be studied to
- achieve this, of which some of the more important are:
-
- a) careful design of the abstract test case specification to give flexibility
- where appropriate, but show which requirements have to be met (this is the
- subject of this Recommendation);
-
- b) careful specification of the real tester which should be used to run the
- test suite; again this specification should give flexibility where
- appropriate, but show which requirements have to be met, including all test
- coordination procedures (if any);
-
- c) careful specification of the procedure to be followed in determining how the
- contents of the PICS are to be used in the analysis of outcomes of test
- cases; there should be no room for "optimistic" interpretation;
-
- d) careful specification of the procedures to be followed by test laboratories
- as regards the repetition of a test case before making a final verdict for
- that test purpose;
-
- e) a proforma for a conformance test report;
-
- f) careful specification of the procedures necessary when synthesizing an
- overall summary.
-
- 6.4.4 Auditability of results
-
- For legal reasons, as well as others, it may be necessary to review the observed
- outcomes from the execution of a conformance test suite in order to make sure that all
- procedures have been correctly followed. Whether or not analysis has been carried out in
- a manual or automatic mode, it is essential that all inputs, outputs, and other test
- events are carefully logged, and the analysis of the results recorded. In some cases this
- may be the responsibility of the test realizer, who may elect to include the test
- criteria in the conformance log, as well as all outcomes. In others, it may be the
- responsibility of the test laboratory, which might be required to follow all standard
- procedures concerning the recording of results.
-
- Note - As far as auditability is concerned, some automatic procedures would be preferred,
- but it should be appreciated that, from a legal standpoint, such procedures would themselves
- have to be accredited if they are to be credible.